Localist Attractor Networks
Submitted to: Neural Computation

Authors

  • Richard S. Zemel
  • Michael C. Mozer
Abstract

Attractor networks, which map an input space to a discrete output space, are useful for pattern completion—cleaning up noisy or missing input features. However, designing a net to have a given set of attractors is notoriously tricky; training procedures are CPU intensive and often produce spurious attractors and ill-conditioned attractor basins. These difficulties occur because each connection in the network participates in the encoding of multiple attractors. We describe an alternative formulation of attractor networks in which the encoding of knowledge is local, not distributed. Although localist attractor networks have similar dynamics to their distributed counterparts, they are much easier to work with and interpret. We propose a statistical formulation of localist attractor net dynamics, which yields a convergence proof and a mathematical interpretation of model parameters. We present simulation experiments that explore the behavior of localist attractor networks, showing that they yield few spurious attractors, and they readily exhibit two desirable properties of psychological and neurobiological models: priming—faster convergence to an attractor if the attractor has been recently visited—and gang effects—in which the presence of an attractor enhances the attractor basins of neighboring attractors.
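The statistical formulation described in the abstract can be illustrated with a short sketch. The Python code below is our own minimal illustration under assumed names and schedules, not the paper's implementation: each attractor is a single localist unit, the state is pulled toward a responsibility-weighted average of the attractor locations (a softmax over distances, as in a Gaussian mixture), and annealing the width sigma and the input weight alpha settles the state onto one attractor.

import numpy as np

def localist_attractor_step(y, xi, W, priors, sigma, alpha):
    """One update of a localist attractor net (illustrative sketch).

    y      : current state vector, shape (d,)
    xi     : external input / observation, shape (d,)
    W      : attractor locations, one row per attractor, shape (k, d)
    priors : prior strength of each attractor unit, shape (k,)
    sigma  : current width of the Gaussian around each attractor
    alpha  : current weighting of the external input
    """
    # Responsibility of each attractor for the current state: a softmax of
    # negative squared distances, i.e. the posterior under a mixture of
    # Gaussians centered on the attractors (computed in log space for
    # numerical stability).
    sq_dist = np.sum((W - y) ** 2, axis=1)
    log_q = np.log(priors) - sq_dist / (2.0 * sigma ** 2)
    q = np.exp(log_q - log_q.max())
    q /= q.sum()
    # New state: blend the observation with the responsibility-weighted
    # average of the attractor locations.
    return alpha * xi + (1.0 - alpha) * (q @ W)

def run(xi, W, priors, n_steps=50, sigma0=2.0, decay=0.9):
    """Anneal sigma and alpha so the state settles onto one attractor.

    The geometric annealing schedule here is an assumption for the sketch.
    """
    y, sigma, alpha = xi.copy(), sigma0, 1.0
    for _ in range(n_steps):
        y = localist_attractor_step(y, xi, W, priors, sigma, alpha)
        sigma = max(sigma * decay, 1e-3)   # narrow the Gaussians over time
        alpha *= decay                     # fade out the external input
    return y

In this sketch, given k attractor rows in W and a noisy input xi, run(xi, W, priors) returns a cleaned-up pattern near one of the attractors; because each attractor is encoded by its own row rather than spread across shared weights, no spurious attractors are created, and raising priors[i] enlarges attractor i's basin, which is one simple way to mimic the priming effect described above.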


Similar Articles


Analog-symbolic memory that tracks via reconsolidation

A fundamental part of a computational system is its memory, which is used to store and retrieve data. Classical computer memories rely on a static approach and are very different from human memories. Neural network memories are based on auto-associative attractor dynamics and thus provide a high level of pattern completion. However, they are not used in general computation since there are pra...


A Generative Model for Attractor Dynamics

Attractor networks, which map an input space to a discrete output space, are useful for pattern completion. However, designing a net to have a given set of attractors is notoriously tricky; training procedures are CPU intensive and often produce spurious attractors and ill-conditioned attractor basins. These difficulties occur because each connection in the network participates in the encoding o...


Reducing connectivity by using cortical modular bands

The way information is represented and processed in a neural network may have important consequences on its computational power and complexity. Basically, information representation refers to distributed or localist encoding and information processing refers to schemes of connectivity that can be complete or minimal. In the past, theoretical and biologically inspired approaches of neural comput...


Network Capacity for Latent Attractor Computation

Attractor networks have been one of the most successful paradigms in neural computation and have been used as models of computation in the nervous system. Many experimentally observed phenomena, such as coherent population codes, contextual representations, and replay of learned neural activity patterns, are explained well by attractor dynamics. Recently we proposed a paradigm called latent attractor...



Journal:

Volume   Issue

Pages  -

Publication date: 2007